
    Compensation of Position-Related Artifacts in Activity Recognition (Kompensation positionsbezogener Artefakte in Aktivitätserkennung)

    This thesis investigates how placement variations of electronic devices influence the possibility of using sensors integrated in those devices for context recognition. The vast majority of context recognition research assumes well-defined, fixed sensor locations. Although this might be acceptable for some application domains (e.g. in an industrial setting), users in general will have a hard time coping with these limitations. If one needs to remember to carry dedicated sensors and to adjust their orientation from time to time, the activity recognition system is more distracting than helpful. How can we deal with device location and orientation changes to make context sensing mainstream? This thesis presents a systematic evaluation of device placement effects in context recognition. We first deal with detecting whether a device is carried on the body or placed somewhere in the environment. If the device is placed on the body, it is useful to know on which body part. We also address how to deal with sensors changing their position and their orientation during use. For each of these topics some highlights are given in the following. Regarding environmental placement, we introduce an active sampling approach to infer symbolic object location. This approach requires only simple sensors (acceleration, sound) and no infrastructure setup. The method works for specific placements such as "on the couch" or "in the desk drawer" as well as for general location classes such as "closed wood compartment" or "open iron surface". In the experimental evaluation we reach a recognition accuracy of 90% and above over a total of more than 1200 measurements from 35 specific locations (taken from 3 different rooms) and 12 abstract location classes. To derive the coarse device placement on the body, we present a method based solely on rotation and acceleration signals from the device. It works independently of the device orientation. The on-body placement recognition rate is around 80% over 4 min. of unconstrained motion data for the worst scenario and up to 90% over a 2 min. interval for the best scenario. We use over 30 hours of motion data for the analysis. Two special issues of device placement are orientation and displacement. This thesis proposes a set of heuristics that significantly increase the robustness of motion sensor-based activity recognition with respect to sensor displacement. We show how, within certain limits and with modest quality degradation, motion sensor-based activity recognition can be implemented in a displacement-tolerant way. We evaluate our heuristics first on a set of synthetic lower arm motions which are well suited to illustrate the strengths and limits of our approach, then on an extended modes-of-locomotion problem (sensors on the upper leg) and finally on a set of exercises performed on various gym machines (sensors placed on the lower arm). In this example our heuristic raises the recognition rate for a displaced accelerometer, which had 96% recognition when not displaced, from 24% to 82%.
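
    The displacement heuristics themselves are not spelled out in the abstract. One common ingredient of orientation- and displacement-tolerant motion sensing is to work on orientation-invariant quantities such as the acceleration magnitude rather than raw axes; the Python sketch below illustrates that general idea only, and is not the thesis' actual method (function names, window sizes, and features are assumptions):

```python
import numpy as np

def orientation_invariant_features(acc, window=128, step=64):
    """Simple orientation-invariant features from a 3-axis accelerometer
    stream (shape: n_samples x 3, in m/s^2). Using the vector magnitude
    instead of raw axes removes the effect of device rotation; windowed
    statistics add some tolerance to moderate displacement along a limb.
    Illustrative sketch only, not the method from the thesis."""
    mag = np.linalg.norm(acc, axis=1)               # rotation-invariant magnitude
    feats = []
    for start in range(0, len(mag) - window + 1, step):
        w = mag[start:start + window]
        feats.append([
            w.mean(),                                # gravity + posture component
            w.std(),                                 # motion intensity
            np.abs(np.diff(w)).mean(),               # jerkiness
            np.percentile(w, 90) - np.percentile(w, 10),  # dynamic range
        ])
    return np.array(feats)

# Example: 10 s of synthetic 100 Hz data
rng = np.random.default_rng(0)
acc = rng.normal(0.0, 1.0, size=(1000, 3)) + np.array([0.0, 0.0, 9.81])
print(orientation_invariant_features(acc).shape)
```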

    Eyewear Computing – Augmenting the Human with Head-Mounted Wearable Assistants

    The seminar was composed of workshops and tutorials on head-mounted eye tracking, egocentric vision, optics, and head-mounted displays. The seminar welcomed 30 academic and industry researchers from Europe, the US, and Asia with a diverse background, including wearable and ubiquitous computing, computer vision, developmental psychology, optics, and human-computer interaction. In contrast to several previous Dagstuhl seminars, we used an ignite talk format to reduce the time of talks to one half-day and to leave the rest of the week for hands-on sessions, group work, general discussions, and socialising. The key results of this seminar are 1) the identification of key research challenges and summaries of breakout groups on multimodal eyewear computing, egocentric vision, security and privacy issues, skill augmentation and task guidance, eyewear computing for gaming, as well as prototyping of VR applications, 2) a list of datasets and research tools for eyewear computing, 3) three small-scale datasets recorded during the seminar, 4) an article in ACM Interactions entitled “Eyewear Computers for Human-Computer Interaction”, as well as 5) two follow-up workshops on “Egocentric Perception, Interaction, and Computing” at the European Conference on Computer Vision (ECCV) as well as “Eyewear Computing” at the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp).

    Different Languages, Different Questions: Language Versioning in Q&A

    Question and Answer (Q&A) communities have become effective forums for humans to collaborate and build accurate domain-specific archives of information. Stack Overflow is a prime example of a system which has effectively leveraged Q&A to build a strong archive of computer programming information. However, the English site is dominant in both size and scope. To reach a wider audience, Stack Overflow has started language-specific sites. In this paper, we seek to understand how these language version sites are used, and whether they form unique Q&A structures or mirror the English version. The results indicate that each site is structured differently, and that users of different languages have different question-asking patterns. The contributions from this work are useful in informing designers of systems attempting to conduct language versioning and provide an argument for developing sites within languages, rather than only providing translated versions.

    Adversarial Attacks on Classifiers for Eye-based User Modelling

    An ever-growing body of work has demonstrated the rich information content available in eye movements for user modelling, e.g. for predicting users' activities, cognitive processes, or even personality traits. We show that state-of-the-art classifiers for eye-based user modelling are highly vulnerable to adversarial examples: small artificial perturbations in gaze input that can dramatically change a classifier's predictions. We generate these adversarial examples using the Fast Gradient Sign Method (FGSM), which linearises the loss function around the input to find suitable perturbations. On the sample task of eye-based document type recognition we study the success of different adversarial attack scenarios: with and without knowledge about classifier gradients (white-box vs. black-box) as well as with and without targeting the attack to a specific class. In addition, we demonstrate the feasibility of defending against adversarial attacks by adding adversarial examples to a classifier's training data.
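
    The FGSM step itself is well documented: perturb the input by a small step in the direction of the sign of the gradient of the loss with respect to the input. Below is a minimal NumPy sketch using a toy logistic-regression model as a stand-in for the eye-movement classifiers studied in the paper; the model, feature dimension, and epsilon are assumptions for illustration only:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, epsilon=0.05):
    """One-step FGSM: x_adv = x + epsilon * sign(dL/dx), here for a logistic
    regression model p = sigmoid(w.x + b) with binary cross-entropy loss."""
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w          # dL/dx for binary cross-entropy
    return x + epsilon * np.sign(grad_x)

# Toy example: a 10-dimensional "gaze feature" vector with true label y = 1
rng = np.random.default_rng(1)
w, b = rng.normal(size=10), 0.0
x, y = rng.normal(size=10), 1.0
x_adv = fgsm(x, y, w, b)
print("prediction before:", sigmoid(w @ x + b), "after:", sigmoid(w @ x_adv + b))
```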

    Detecting an Offset-Adjusted Similarity Score Based on Duchenne Smiles

    Detecting interpersonal synchrony in the wild through ubiquitous wearable sensing invites promising new social insights as well as the possibility of new interactions between humans and between humans and agents. We present the Offset-Adjusted SImilarity Score (OASIS), a real-time method for detecting similarity, which we demonstrate on visual detection of Duchenne smiles between a pair of users. We conduct a user study survey (N = 27) to measure a user-based interoperability score on smile similarity and compare the user score with OASIS as well as with the rolling-window Pearson correlation and the Dynamic Time Warping (DTW) method. Ultimately, our results indicate that our algorithm has intrinsic qualities comparable to the user score and compares well with the statistical correlation methods. It takes the temporal offset between the input signals into account, with the added benefit of being an algorithm that can be adapted to run in real time with less computational intensity than traditional time-series correlation methods.
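
    OASIS is the authors' own algorithm and is not reproduced here. The sketch below only illustrates the general idea of an offset-aware similarity that the abstract refers to: search for the lag that maximises Pearson correlation between two smile-intensity traces and report the correlation at that lag. The lag range and signals are invented for illustration:

```python
import numpy as np

def offset_adjusted_similarity(a, b, max_lag=30):
    """Illustrative offset-aware similarity (not the OASIS algorithm):
    find the lag in [-max_lag, max_lag] that maximises Pearson correlation
    between signals a and b, and return that correlation plus the lag."""
    best_r, best_lag = -1.0, 0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = a[lag:], b[:len(b) - lag]
        else:
            x, y = a[:len(a) + lag], b[-lag:]
        n = min(len(x), len(y))
        if n < 2:
            continue
        r = np.corrcoef(x[:n], y[:n])[0, 1]
        if r > best_r:
            best_r, best_lag = r, lag
    return best_r, best_lag

# Two synthetic smile-intensity traces, the second delayed by 10 frames
t = np.linspace(0, 6 * np.pi, 300)
a = np.clip(np.sin(t), 0, None)
b = np.roll(a, 10) + np.random.default_rng(2).normal(0, 0.05, len(a))
print(offset_adjusted_similarity(a, b))
```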

    Automated data gathering and training tool for personalized "Itchy Nose"

    In "Itchy Nose" we proposed a sensing technique for detecting finger movements on the nose for supporting subtle and discreet interaction. It uses the electrooculography sensors embedded in the frame of a pair of eyeglasses for data gathering and uses machine-learning technique to classify different gestures. Here we further propose an automated training and visualization tool for its classifier. This tool guides the user to make the gesture in proper timing and records the sensor data. It automatically picks the ground truth and trains a machine-learning classifier with it. With this tool, we can quickly create trained classifier that is personalized for the user and test various gestures.Postprin

    Salient Visual Features to Help Close the Loop in 6D SLAM

    One fundamental problem in mobile robotics research is Simultaneous Localization and Mapping (SLAM): a mobile robot has to localize itself in an unknown environment and, at the same time, generate a map of the surrounding area. One fundamental part of SLAM algorithms is loop closing: the robot detects whether it has reached an area that has been visited before, and uses this information to improve the pose estimate in the next step. In this work, visual camera features are used to assist closing the loop in an existing 6 degree-of-freedom SLAM (6D SLAM) architecture. For our robotics application we propose and evaluate several detection methods, including salient region detection and maximally stable extremal region detection. The detected regions are encoded using SIFT descriptors and stored in a database. Loops are detected by matching the images' descriptors. A comparison of the different feature detection methods shows that the combination of salient and maximally stable extremal regions suggested by Newman and Ho performs moderately.
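
    As a hedged illustration of the descriptor-matching step described above, the OpenCV sketch below matches SIFT descriptors of each new frame against previously stored frames and flags loop-closure candidates. It uses plain SIFT keypoints rather than the salient-region and MSER detectors of the paper, and the match thresholds are assumptions:

```python
import cv2

def loop_closure_candidates(images, min_good_matches=40, ratio=0.75):
    """Illustrative loop-closure detection by SIFT descriptor matching.
    (The paper combines salient-region and MSER detectors; plain SIFT
    keypoints are used here only to keep the sketch short.)"""
    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    database = []                        # list of (frame index, descriptors)
    candidates = []
    for idx, img in enumerate(images):
        _, desc = sift.detectAndCompute(img, None)
        if desc is None:
            continue
        for past_idx, past_desc in database:
            matches = matcher.knnMatch(desc, past_desc, k=2)
            # Lowe-style ratio test to keep only distinctive matches
            good = [m[0] for m in matches
                    if len(m) == 2 and m[0].distance < ratio * m[1].distance]
            if len(good) >= min_good_matches:
                candidates.append((past_idx, idx, len(good)))
        database.append((idx, desc))
    return candidates

# Usage (paths are placeholders): grayscale frames from the robot's camera
# frames = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in frame_paths]
# print(loop_closure_candidates(frames))
```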

    Seeing Our Blind Spots: Smart Glasses-Based Simulation to Increase Design Students’ Awareness of Visual Impairment

    As the population ages, many will acquire visual impairments. To improve design for these users, it is essential to build awareness of their perspective during everyday routines, especially for design students. Although several visual impairment simulation toolkits exist, both in academia and as commercial products, analog and static simulation tools do not simulate effects that depend on the user’s eye movements. Meanwhile, VR and video see-through AR simulation methods are constrained by smaller fields of view than the natural human visual field and also suffer from the vergence-accommodation conflict (VAC), which correlates with visual fatigue, headache, and dizziness. In this paper, we enable an on-the-go, VAC-free, visually impaired experience by leveraging our optical see-through glasses. The FOV of our glasses is approximately 160 degrees horizontally and 140 degrees vertically, and participants can experience both loss of central vision and loss of peripheral vision at different severities. Our evaluation (n = 14) indicates that the glasses can significantly and effectively reduce visual acuity and visual field without causing typical motion sickness symptoms such as headache or visual fatigue. Questionnaires and qualitative feedback also showed how the glasses helped to increase participants’ awareness of visual impairment.

    Affective Umbrella – A Wearable System to Visualize Heart and Electrodermal Activity, towards Emotion Regulation through Somaesthetic Appreciation

    In this paper, we introduce Affective Umbrella, a novel system to record, analyze, and visualize physiological data in real time via an umbrella handle. We implement a biofeedback loop in the system that triggers visualization changes to reflect and regulate emotions through somaesthetic appreciation. We report the methodology, process, and results on data reliability and the impact of visual feedback on emotions. We evaluated the system in a real-life user study (n=21) in rainy weather at night. The statistical results demonstrate the potential of biofeedback visualization to regulate emotional arousal: compared to baseline, the self-reported SAM scale showed a significantly higher arousal score (p=.0022) and lower dominance (p=.0277), and physiological arousal increased significantly with biofeedback in terms of pNN50 (p<.0001), with a significant difference in terms of RMSSD. There was no significant difference in emotional valence changes on the SAM scale. Furthermore, we compared two biofeedback patterns (mirror and inversion): the mirror pattern produced significantly higher emotional arousal than the inversion pattern (p=.0277) in the SAM results and significantly lower RMSSD (p<.0001). This work demonstrates the potential of capturing physiological data through an umbrella handle and using this data to influence a user’s emotional state via lighting effects.
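
    pNN50 and RMSSD, the two heart-rate-variability metrics cited above, are standard quantities computed over successive inter-beat (RR) intervals. The short sketch below shows their textbook definitions; it is not the paper's processing pipeline, and the example RR values are invented:

```python
import numpy as np

def rmssd(rr_ms):
    """Root mean square of successive differences of RR intervals (ms)."""
    diffs = np.diff(np.asarray(rr_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

def pnn50(rr_ms):
    """Percentage of successive RR-interval differences larger than 50 ms."""
    diffs = np.abs(np.diff(np.asarray(rr_ms, dtype=float)))
    return float(100.0 * np.mean(diffs > 50.0))

# Example: a short series of inter-beat intervals in milliseconds
rr = [812, 790, 845, 870, 805, 798, 860, 815]
print(f"RMSSD = {rmssd(rr):.1f} ms, pNN50 = {pnn50(rr):.1f} %")
```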